
    Optimization Technique for Efficient Dynamic Query Forms with Keyword Search and NoSQL

    Modern web databases and scientific databases maintain tremendous volumes of heterogeneous, unstructured data. Traditional data mining techniques cannot mine this data effectively, and such real-world databases may contain hundreds or even thousands of relations and attributes. Recent trends such as Big Data and cloud computing have driven the adoption of NoSQL, which simply means "Not Only SQL". Most current web applications are hosted in the cloud and accessed over the internet, which creates an explosion in the number of concurrent users. We therefore propose a technique for handling unstructured data, named Dynamic Query Forms with NoSQL, which provides a dynamic query form interface for exploring an organization's database. The system uses a document-oriented NoSQL database, MongoDB, which supports dynamic queries that do not require predefined map-reduce functions. Query form generation is an iterative process guided by the user: at each step the system automatically generates a ranked list of form components, and the user adds the desired component to the query form and submits queries to view the results. Two traditional measures, precision and recall, are used to evaluate the quality of query results, and an overall performance measure, the F-score, is derived from them. DOI: 10.17762/ijritcc2321-8169.15070
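    As a small illustration of the precision, recall, and F-score measures mentioned above (this is not the authors' implementation, and the document identifiers are hypothetical), the following Python sketch scores one query result:

    # Minimal sketch: precision, recall and F-score of one query result.
    def precision_recall_fscore(retrieved, relevant):
        """Compute precision, recall and F-score for one query result."""
        retrieved, relevant = set(retrieved), set(relevant)
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        fscore = (2 * precision * recall / (precision + recall)
                  if precision + recall else 0.0)
        return precision, recall, fscore

    # Example: the form-generated query returned 4 documents, 3 of them relevant.
    p, r, f = precision_recall_fscore(
        retrieved=["d1", "d2", "d3", "d7"],
        relevant=["d1", "d2", "d3", "d5", "d9"],
    )
    print(f"precision={p:.2f} recall={r:.2f} F-score={f:.2f}")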

    Publication of XML documents without Information Leakage with data inference

    Recent applications increasingly require that published XML documents meet precise security requirements. In this paper we consider data publishing applications in which the publisher specifies which information is sensitive and must be protected from outside users. We show that if a document is published carelessly, users can combine it with common knowledge to infer the protected information; the goal is to protect such information in the presence of data inference with common knowledge. An important feature of XML is that it allows schema declarations with integrity constraints to accompany instance data, and it composes individual pieces of data in a tree-like fashion in which a link from a parent node to a subtree carries ontological information about the relationship between those pieces of data. The inference problem in XML documents therefore concerns potentially secret and important information. Our work addresses this problem by providing a control mechanism that enforces inference-proof publication of an XML document. The output is again an XML document that, under the users' inference capabilities, neither contains nor implies any confidential information, yet is indistinguishable from the actual XML document. The proposed approach produces a weakened document that takes inference capabilities into account, modifies the schema accordingly, and yields inference-proof documents. DOI: 10.17762/ijritcc2321-8169.15077
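    The following Python sketch is a loose illustration of the publishing step only: it drops elements the publisher has marked as sensitive before the document is released. The element names and the "sensitive" attribute are hypothetical, and the paper's actual weakening algorithm, which also reasons about schema constraints and user inference capabilities, is not reproduced here:

    # Illustrative sketch only: strip publisher-marked sensitive elements from an
    # XML document before publication. Element/attribute names are hypothetical.
    import xml.etree.ElementTree as ET

    SOURCE = """
    <patients>
      <patient>
        <name>Alice</name>
        <diagnosis sensitive="true">flu</diagnosis>
      </patient>
    </patients>
    """

    def weaken(root):
        """Drop every element the publisher marked as sensitive."""
        for parent in list(root.iter()):
            for child in list(parent):
                if child.get("sensitive") == "true":
                    parent.remove(child)
        return root

    published = weaken(ET.fromstring(SOURCE))
    print(ET.tostring(published, encoding="unicode"))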

    Improving Efficiency and Lifetime of Mobile Devices Using Cloud Environment

    Mobile cloud computing shares computing resources among multiple mobile devices. Mobile users demand a certain level of quality of service (QoS) from their devices, but QoS may degrade when a moving device changes its interface gateway. In this paper we identify, formulate, and address this QoS problem through bandwidth shifting and redistribution by the interfacing gateways so as to maximize their utility. Bandwidth shifting alone is not sufficient to guarantee QoS, because spectral efficiency varies across channels; we therefore formulate bandwidth redistribution as a utility-maximization problem and solve it using a modified descending-bid auction (AQUM). In the AQUM scheme, each gateway aggregates the demand of all connected mobile nodes and generates one large request for the required amount of bandwidth. Simulation results establish the correctness of the proposed algorithm, and we prove the convergence of AQUM theoretically by deducing the maximum and minimum selling prices of bandwidth. DOI: 10.17762/ijritcc2321-8169.15076
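    A minimal sketch of a descending-price bandwidth auction is shown below. It is not the paper's AQUM algorithm; the gateway names, valuations, demands, capacity, and price step are invented purely for illustration:

    # Toy descending-price auction: the unit price falls until the channel
    # capacity is used up; a gateway accepts once the price reaches its valuation.
    def descending_bid_auction(demands, valuations, capacity, start_price, step=0.5):
        price, remaining, allocation = start_price, capacity, {}
        while price > 0 and remaining > 0:
            for gw, demand in demands.items():
                if remaining <= 0:
                    break
                if gw not in allocation and valuations[gw] >= price:
                    granted = min(demand, remaining)
                    allocation[gw] = granted
                    remaining -= granted
            price -= step
        return price + step, allocation  # price of the last round that awarded bandwidth

    demands = {"gw1": 10, "gw2": 6, "gw3": 8}          # aggregated node demand (Mbps)
    valuations = {"gw1": 4.0, "gw2": 2.5, "gw3": 3.0}  # utility per Mbps (hypothetical)
    price, allocation = descending_bid_auction(demands, valuations,
                                               capacity=15, start_price=5.0)
    print("clearing price:", price, "allocation:", allocation)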

    Optimal Data Deduplication In Cloud With Authorization

    Cloud technology is widely used because it allows sharing and centralized storage of data, shared data processing, and online access to computing services and resources from various types of devices. One of the critical challenges of cloud storage services is managing the ever-increasing volume of data, and data deduplication is a novel technique for addressing it. Deduplication removes and prevents duplicate copies of the same data. Although deduplication has several benefits, it raises privacy and security concerns for users, since it can enable insider or outsider attacks. Achieving deduplication together with data security in a cloud environment is therefore an even harder problem to solve. The objective of this paper on optimal authorized data deduplication in the cloud is to present the proposed system, analyse deduplication techniques, and describe optimal authorization measures that provide security alongside deduplication in the cloud environment. DOI: 10.17762/ijritcc2321-8169.15073
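    As a toy sketch of hash-based deduplication gated by a simple privilege check (not the authorization scheme analysed in the paper, and with invented users, privileges, and file contents), consider:

    # Minimal, hypothetical sketch: content-hash deduplication plus a privilege
    # check before an upload is accepted or skipped as a duplicate.
    import hashlib

    class DedupStore:
        def __init__(self):
            self.blocks = {}   # content hash -> data
            self.owners = {}   # content hash -> set of users who uploaded it

        def upload(self, user, privileges, data, required_privilege):
            if required_privilege not in privileges:
                raise PermissionError(f"{user} lacks privilege {required_privilege!r}")
            digest = hashlib.sha256(data).hexdigest()
            if digest in self.blocks:          # duplicate: record ownership only
                self.owners[digest].add(user)
                return "deduplicated"
            self.blocks[digest] = data
            self.owners[digest] = {user}
            return "stored"

    store = DedupStore()
    print(store.upload("alice", {"hr"}, b"payroll.csv contents", "hr"))  # stored
    print(store.upload("bob",   {"hr"}, b"payroll.csv contents", "hr"))  # deduplicated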

    Formal Semantic Approach to Detect Smart Contract Vulnerabilities Using KEVM

    Smart contracts are self-executing programs that run on blockchain platforms. While smart contracts offer a range of benefits, such as immutability and transparency, they are not immune to vulnerabilities. Malicious actors can exploit smart contract vulnerabilities to execute unintended actions or access sensitive data [1]. One approach to mitigating smart contract vulnerabilities is formal verification, a method of verifying the correctness of software using mathematical techniques: it involves mathematically proving that a program conforms to a set of specifications. Formal verification can help detect and eliminate vulnerabilities in smart contracts before they are deployed on the blockchain. KEVM (K Framework-based EVM) is a framework that allows formal verification of smart contracts on the Ethereum Virtual Machine (EVM). KEVM uses the K Framework, a formal semantics framework, to specify the behavior of the EVM. With KEVM, smart contract developers can verify the correctness of their contracts before deployment, reducing the risk of vulnerabilities. In this paper, we have studied smart contract vulnerabilities such as over-usage of gas, signature replay attacks, and misuse of the fallback function. We have also written formal specifications for these vulnerabilities and executed them using KEVM.
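    The sketch below is a toy Python model, not a KEVM or K specification, of the signature replay issue named above: a contract-like object that does not track nonces would accept the same signed message twice, whereas nonce tracking rejects the replay. The HMAC "signature" is a stand-in for an on-chain ECDSA signature, and all names are illustrative:

    # Toy model of signature replay protection via nonce tracking.
    import hmac, hashlib

    SIGNER_KEY = b"holder-secret"

    def sign(payload: bytes) -> bytes:
        return hmac.new(SIGNER_KEY, payload, hashlib.sha256).digest()

    class Wallet:
        def __init__(self):
            self.balance = 100
            self.used_nonces = set()

        def withdraw(self, amount: int, nonce: int, signature: bytes) -> bool:
            payload = f"{amount}:{nonce}".encode()
            if not hmac.compare_digest(sign(payload), signature):
                return False                  # bad signature
            if nonce in self.used_nonces:
                return False                  # replay detected and rejected
            self.used_nonces.add(nonce)
            self.balance -= amount
            return True

    w = Wallet()
    msg_sig = sign(b"10:1")
    print(w.withdraw(10, 1, msg_sig))   # True  - first use of the signed message
    print(w.withdraw(10, 1, msg_sig))   # False - replaying the same message fails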

    Classification of traffic over collaborative IoT and Cloud platforms using deep learning recurrent LSTM

    Internet of Things (IoT) and cloud-based collaborative platforms have emerged as new infrastructures in recent decades. Classifying network traffic as benign or malevolent is indispensable for IoT-cloud collaborative platforms, so that channel capacity is used optimally for transmitting benign traffic while malicious traffic is blocked. The traffic classification mechanism should be dynamic and capable of classifying network traffic quickly, so that malevolent traffic is identified at an early stage and benign traffic is channelled speedily to the destined nodes. In this paper, we present a deep learning recurrent LSTM-based technique to classify traffic over IoT-cloud platforms. Machine learning techniques (MLTs) have also been employed to compare their performance with the proposed LSTM RNet classification method. In the proposed work, network traffic is classified into three classes: Tor-Normal, NonTor-Normal and NonTor-Malicious. The results show that the proposed LSTM RNet classifies the traffic accurately and also helps in reducing network latency and enhancing the data transmission rate and network throughput.
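    A hedged sketch of an LSTM classifier for the three traffic classes is given below. It is not the paper's LSTM RNet; the layer sizes, sequence length, feature count, and synthetic training data are assumptions made purely for illustration:

    # Sketch: LSTM classifier over per-flow feature sequences, three classes
    # (Tor-Normal, NonTor-Normal, NonTor-Malicious). Data here is synthetic.
    import numpy as np
    import tensorflow as tf

    NUM_CLASSES, SEQ_LEN, NUM_FEATURES = 3, 20, 8   # assumed flow-window shape

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(SEQ_LEN, NUM_FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Synthetic stand-in for per-flow feature sequences and their labels.
    x = np.random.rand(256, SEQ_LEN, NUM_FEATURES).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=256)
    model.fit(x, y, epochs=2, batch_size=32, verbose=0)
    print(model.predict(x[:1]).round(3))   # class probabilities for one flow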

    Data Mining Techniques for Digital Forensic Analysis

    Computer forensics involves the protection, classification, and extraction of information, and the documentation of evidence stored as data or magnetically encoded information. Organizations, however, accumulate an increasing amount of data from many sources: computing peripherals, personal digital assistants (PDAs), consumer electronic devices, computer systems, networking equipment, and various types of media, among others. For law enforcement officers, police forces, and detective agencies, finding similar kinds of evidence and crimes committed previously is time-consuming and tedious. The main motive of this work is to combine data mining techniques with computer forensic tools in order to prepare the data for analysis, find crime patterns, understand the mind of the criminal, help investigation agencies stay one step ahead of the offenders, speed up the process of solving crimes, and carry out computer forensic analyses for criminal cases.
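    As one concrete example of the kind of data mining step described above, the sketch below uses TF-IDF and cosine similarity to surface past cases that resemble a new case description. The case texts are invented, and this is not a specific tool from the paper:

    # Rank previously recorded cases by textual similarity to a new case.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    past_cases = [
        "phishing email with malicious pdf attachment targeting bank customers",
        "usb drive containing deleted spreadsheets recovered from suspect laptop",
        "router logs showing repeated ssh brute force attempts at night",
    ]
    new_case = "bank customers received emails carrying an infected pdf file"

    vectorizer = TfidfVectorizer(stop_words="english")
    case_matrix = vectorizer.fit_transform(past_cases)
    query_vec = vectorizer.transform([new_case])
    scores = cosine_similarity(query_vec, case_matrix).ravel()

    for case, score in sorted(zip(past_cases, scores), key=lambda p: -p[1]):
        print(f"{score:.2f}  {case}")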

    Valuable Feature Improvement of Content Clustering and Categorization via Metadata

    In text mining applications, every record contains side data. This side data may be of various types, for instance document provenance information, links within the record, user access behaviour from web logs, or other non-textual attributes embedded in the text record. Such attributes carry a large amount of information that can help clustering, but it is sometimes difficult to estimate their value because part of the information is noise. In such cases it can be risky to incorporate side information into the mining process: it may either improve the quality of the representation used for mining or add noise to the approach. We therefore require a principled way to perform the mining process so as to maximize the benefit of using this side information. Here we use a k-medoids algorithm, which overcomes drawbacks of the k-means algorithm, and design an algorithm that combines a classical partitioning method with probabilistic models to produce an effective clustering approach. We then show how to extend the approach to the classification problem. This general technique is used to strengthen both clustering and classification algorithms, so the use of side data can greatly enhance the quality of content clustering and categorization while maintaining a high level of efficiency. Finally, the entire framework is deployed in the cloud. DOI: 10.17762/ijritcc2321-8169.15078
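    A from-scratch k-medoids sketch (the partitioning step referred to above) is given below. It clusters plain feature vectors only and ignores the probabilistic side-information model and the cloud deployment described in the paper; the data is synthetic:

    # Simple k-medoids: alternate between assigning points to the nearest medoid
    # and re-electing each cluster's medoid as its minimum-total-distance member.
    import numpy as np

    def k_medoids(points, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        medoids = rng.choice(len(points), size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(dist[:, medoids], axis=1)
            new_medoids = medoids.copy()
            for c in range(k):
                members = np.where(labels == c)[0]
                if len(members):
                    costs = dist[np.ix_(members, members)].sum(axis=1)
                    new_medoids[c] = members[np.argmin(costs)]
            if np.array_equal(new_medoids, medoids):
                break
            medoids = new_medoids
        labels = np.argmin(dist[:, medoids], axis=1)
        return medoids, labels

    data = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 5])
    medoids, labels = k_medoids(data, k=2)
    print("medoid rows:", medoids, "cluster sizes:", np.bincount(labels))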

    ‘Awareness to Amendments to Income Tax’

    The taxation of income generated within India, and of income generated by Indians overseas, is governed by the Income Tax Act, 1961. This study aims to present a lucid yet simple understanding of the taxation structure of an individual's income in India for the assessment year 2018-19. Every individual should know that tax planning, undertaken to avail of the incentives provided by the Government of India under different statutes, is legal. This project covers the basics of the Income Tax Act, 1961 as amended by the Finance Bill, 2018 and broadly presents the nuances of prudent tax planning and the tax-saving options provided under these laws. Any insidious means to avoid or evade tax is a cognizable offence under Indian law, and all citizens should refrain from such acts.

    Scalable TPTDS Data Anonymization over Cloud using MapReduce

    With the rapid advancement of the big data digital age, large amounts of data are collected, mined, and published, and data publishing has become a routine daily activity. Cloud computing is the most suitable model for supporting big data applications. Many cloud services need users to share microdata, such as electronic health records or financial transaction data, so that it can be analysed, but one of the major issues in moving to the cloud is the threat to privacy. Data anonymization techniques are widely used to address these privacy concerns; anonymizing data sets through generalization to achieve k-anonymity is one such privacy-preserving technique. Currently, the scale of data in many cloud applications is increasing massively in line with the Big Data trend, making it difficult for commonly used software tools to capture, handle, manage, and process such large-scale datasets. As a result, existing anonymization approaches struggle with large-scale data sets because they do not scale. This paper presents a two-phase top-down specialization (TPTDS) approach to anonymize large-scale datasets. The approach uses the MapReduce framework on the cloud, making it highly scalable and efficient. We also introduce a scheduling mechanism called Optimized Balanced Scheduling (OBS) to apply the anonymization: each dataset has its own sensitive field, that field is assigned a priority, and anonymization is applied to the sensitive field only, according to the schedule. DOI: 10.17762/ijritcc2321-8169.15077
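    The toy sketch below generalizes a single quasi-identifier (age) until every group of records sharing the same generalized values contains at least k records, i.e. satisfies k-anonymity. The records are invented, and the loop does not capture the paper's two-phase top-down specialization over taxonomy trees or its MapReduce execution:

    # Coarsen age bands until every (age band, zip) group has at least K records.
    from collections import Counter

    records = [{"age": a, "zip": "411001"} for a in (23, 24, 26, 31, 33, 35, 36, 38)]
    K = 3

    def generalize(age, width):
        low = (age // width) * width
        return f"{low}-{low + width - 1}"

    def is_k_anonymous(rows, width, k):
        groups = Counter((generalize(r["age"], width), r["zip"]) for r in rows)
        return all(count >= k for count in groups.values())

    width = 5
    while not is_k_anonymous(records, width, K):
        width *= 2                  # coarsen the generalization hierarchy

    print("chosen age-band width:", width)
    print({generalize(r["age"], width) for r in records})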